
    A computational model of human trust in supervisory control of robotic swarms

    Trust is an important factor in human-automation interaction, mediating human operators' reliance on automated systems. In this work, we study human factors in supervisory control of robotic swarms and develop a computational model of human trust in swarm systems with varied levels of autonomy (LOA). We extend classic trust theory by adding an intermediate feedback loop to the trust model, formulating trust evolution as a combination of open-loop trust anticipation and closed-loop trust feedback. A Kalman filter implements this structure. We conducted a human-subjects experiment to collect data on supervisory control of robotic swarms: participants directed a simulated swarm to complete a foraging task using control systems with three LOAs: manual, mixed-initiative (MI), and fully autonomous. In the manual and fully autonomous LOAs, the swarm is controlled exclusively by the human operator or by a search algorithm, respectively; in the MI LOA, the operator and the algorithm control the swarm collaboratively. We train a personalized model for each participant and evaluate it on a held-out data set. The evaluation shows that our Kalman model outperforms existing approaches, including inverse reinforcement learning and dynamic Bayesian network methods. In summary, the proposed work is novel in the following respects: 1) the Kalman estimator is the first to model the complete trust evolution process with both closed-loop feedback and open-loop trust anticipation; 2) the model analyzes time-series data to reveal the influence of events that occur during an interaction, namely user interventions and reported trust levels; 3) the model accounts for the operator's cognitive lag between perceiving and processing the system display; 4) the model uses the Kalman filter structure to fuse information from different sources to estimate the operator's mental state; and 5) the approach yields a personalized model for each individual.
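    To make the estimator concrete, the sketch below shows one way such a Kalman-filter trust model could be structured: the prediction step plays the role of open-loop trust anticipation, and the update step plays the role of closed-loop feedback from observed signals such as self-reported trust. The parameterization, variable names, and signal choices are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

class TrustKalmanFilter:
    """Minimal 1-D Kalman filter over a latent trust state.

    Prediction = open-loop anticipation (trust drifts toward a level
    anticipated from expected automation performance); update =
    closed-loop feedback from a noisy observation such as a trust
    self-report. All parameters here are illustrative, not fitted values.
    """

    def __init__(self, a=0.9, q=0.05, r=0.2, x0=0.5, p0=1.0):
        self.a = a   # persistence of current trust across one step
        self.q = q   # process noise variance (anticipation uncertainty)
        self.r = r   # observation noise variance (noisy trust reports)
        self.x = x0  # latent trust estimate in [0, 1]
        self.p = p0  # estimate variance

    def predict(self, anticipated_trust):
        """Open-loop step: blend current trust with the anticipated level."""
        self.x = self.a * self.x + (1.0 - self.a) * anticipated_trust
        self.p = self.a ** 2 * self.p + self.q
        return self.x

    def update(self, reported_trust):
        """Closed-loop step: fuse a noisy trust observation (e.g. a self-report)."""
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x = self.x + k * (reported_trust - self.x)
        self.p = (1.0 - k) * self.p
        return self.x

# Example: trust anticipated to rise under a reliable autonomous LOA,
# corrected whenever the operator reports a trust level (None = no report).
kf = TrustKalmanFilter()
for report in [0.4, None, 0.6, None, 0.7]:
    kf.predict(anticipated_trust=0.8)
    if report is not None:
        kf.update(report)
print(f"estimated trust: {kf.x:.2f}")
```

    Fusing reports and anticipation through the Kalman gain means a confident estimate (low variance) discounts a single outlying self-report, which is one plausible reading of how the model handles noisy trust signals.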

    Deep Learning, transparency and trust in Human Robot Teamwork

    For autonomous AI systems to be accepted and trusted, users should be able to understand the system's reasoning process, i.e., the system should be transparent. Robotics presents unique programming difficulties in that systems need to map from complicated sensor inputs, such as camera feeds and laser scans, to outputs such as joint angles and velocities. Advances in deep neural networks now make it possible to replace laborious handcrafted features and control code by learning control policies directly from high-dimensional sensor inputs. Because Atari games, where these capabilities were first demonstrated, replicate the robotics problem, they are ideal for investigating how humans might come to understand and interact with agents that have not been explicitly programmed. We present computational and human results for making deep reinforcement learning networks (DRLNs) more transparent using object saliency visualizations of internal states, and we test the effectiveness of expressing saliency through teleological verbal explanations.
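    A common way to compute this kind of saliency, sketched below, is perturbation based: occlude one region of the input frame at a time and measure how much the policy's output changes. The `policy_value` callable and frame dimensions are placeholders, not the paper's actual network or method details.

```python
import numpy as np

def occlusion_saliency(policy_value, frame, patch=8, fill=0.0):
    """Perturbation-based saliency map for a trained policy.

    policy_value: callable mapping a frame (H, W) -> scalar, e.g. the
        state value or the chosen action's Q-value (placeholder for
        whatever network is being explained).
    Saliency of a patch = how much the output changes when that patch
    is occluded; a large change means the region matters to the decision.
    """
    base = policy_value(frame)
    h, w = frame.shape
    rows, cols = -(-h // patch), -(-w // patch)  # ceil division for edges
    saliency = np.zeros((rows, cols))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = frame.copy()
            occluded[i:i + patch, j:j + patch] = fill  # blank one region
            saliency[i // patch, j // patch] = abs(base - policy_value(occluded))
    return saliency

# Toy usage with a stand-in "policy" that values bright pixels in one corner.
frame = np.random.rand(84, 84)
toy_value = lambda f: f[:16, :16].sum()
print(occlusion_saliency(toy_value, frame).round(1))
```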

    Hiding Leader's Identity in Leader-Follower Navigation through Multi-Agent Reinforcement Learning

    Leader-follower navigation is a popular class of multi-robot algorithms in which a leader robot leads the follower robots in a team. The leader has specialized capabilities or mission-critical information (e.g., the goal location) that the followers lack, which makes the leader crucial to the mission's success. However, this also makes the leader a vulnerability: an external adversary who wishes to sabotage the team's mission can simply harm the leader, and the whole mission would be compromised. Since robot motion generated by traditional leader-follower navigation algorithms can reveal the leader's identity, we propose a defense mechanism that hides the leader's identity by ensuring the leader moves in a way that behaviorally camouflages it among the followers, making it difficult for an adversary to identify. To achieve this, we combine multi-agent reinforcement learning, graph neural networks, and adversarial training. Our approach enables the multi-robot team to optimize primary task performance while keeping the leader's motion similar to follower motion, behaviorally camouflaging it with the followers. Our algorithm outperforms existing work that hides the leader's identity by tuning traditional leader-follower control parameters with classical genetic algorithms. We also evaluated human performance at inferring the leader's identity and found that humans had lower accuracy when the robot team used our proposed navigation algorithm.
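    One simple way to realize "optimize task performance while camouflaging the leader" is to penalize the team reward by the adversary's confidence in identifying the true leader, so policy-gradient updates push the leader to move like a follower. The sketch below shows that reward shaping only, not the full MARL + GNN training pipeline; all names and the beta trade-off are illustrative assumptions.

```python
import numpy as np

def shaped_rewards(task_rewards, adversary_probs, leader_idx, beta=0.5):
    """Combine task reward with an identity-hiding penalty.

    task_rewards:    (n_robots,) per-robot task reward for this step.
    adversary_probs: (n_robots,) adversary's softmax belief over which
                     robot is the leader (output of its classifier).
    The team is penalized in proportion to how confidently the adversary
    picks out the true leader; beta trades task performance vs. camouflage.
    """
    penalty = beta * adversary_probs[leader_idx]
    return task_rewards - penalty  # broadcast: every robot shares the penalty

# Toy step: the adversary is 70% sure robot 2 is the leader.
r = shaped_rewards(np.array([1.0, 1.0, 1.0, 1.0]),
                   np.array([0.1, 0.1, 0.7, 0.1]),
                   leader_idx=2)
print(r)  # -> [0.65 0.65 0.65 0.65]
```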

    Human Theory of Mind Inference in Search and Rescue Tasks

    The ability to make inferences about others' mental states is referred to as having a Theory of Mind (ToM). This ability is the foundation of many human social interactions, such as empathy, teamwork, and communication. As intelligent agents become involved in diverse human-agent teams, they, too, are expected to be socially intelligent in order to become effective teammates. To provide a feasible baseline for future socially intelligent agents, this paper presents an experimental study of the process of human ToM inference. Human observers' inferences are compared with participants' verbally reported mental states in a simulated search and rescue task. Results show that ToM inference is a challenging task even for experienced human observers.

    Theory of Mind for Multi-Agent Collaboration via Large Language Models

    While Large Language Models (LLMs) have demonstrated impressive accomplishments in both reasoning and planning, their abilities in multi-agent collaboration remain largely unexplored. This study evaluates LLM-based agents in a multi-agent cooperative text game with Theory of Mind (ToM) inference tasks, comparing their performance with multi-agent reinforcement learning (MARL) and planning-based baselines. We observed evidence of emergent collaborative behaviors and higher-order Theory of Mind capabilities among LLM-based agents. Our results also reveal limitations in LLM-based agents' planning optimization due to systematic failures in managing long-horizon contexts and hallucination about the task state. We explore the use of explicit belief state representations to mitigate these issues, finding that they enhance both task performance and the accuracy of ToM inferences for LLM-based agents. (Accepted to EMNLP 2023, Main Conference.)
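    One plausible reading of "explicit belief state representation" is to maintain a structured task state outside the model and serialize it into each prompt, so long-horizon facts are folded in deterministically rather than recovered from dialogue history. The sketch below illustrates that pattern; the belief-state keys, prompt wording, and the `llm_call` placeholder are assumptions, not the paper's implementation.

```python
import json

def build_prompt(belief, observation):
    """Inject an explicit, externally maintained belief state into the
    prompt so the agent need not recover long-horizon task state from
    raw dialogue history (a common source of hallucination)."""
    return (
        "You are one agent in a cooperative team.\n"
        f"Current belief state (ground truth you maintain):\n{json.dumps(belief, indent=2)}\n"
        f"New observation: {observation}\n"
        "Update your plan and state your next action."
    )

def update_belief(belief, observation):
    """Deterministically fold observations into the belief state instead
    of trusting the LLM to remember them. Keys here are illustrative."""
    if "found victim" in observation:
        belief["victims_found"] += 1
    belief["last_observation"] = observation
    return belief

belief = {"victims_found": 0, "rooms_cleared": [], "last_observation": None}
belief = update_belief(belief, "found victim in room B7")
prompt = build_prompt(belief, "door to room B8 is blocked")
# response = llm_call(prompt)  # placeholder: any chat-completion API
print(prompt)
```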

    Human vs. Deep Neural Network Performance at a Leader Identification Task

    Control of robotic swarms through control over one or more leaders has become the dominant approach to supervisory control of these largely autonomous systems. Resilience in the face of attrition is one of the primary advantages attributed to swarms, yet the presence of leaders makes them vulnerable to decapitation. Algorithms that allow a swarm to hide its leader are a promising solution. We present a novel approach in which neural networks (NNs) trained within a graph neural network (GNN) framework replace conventional controllers, making the swarm more amenable to training. Swarms and an adversary intent on finding the leader were trained and tested in four phases: (1) the swarm learns to follow the leader, (2) the adversary learns to recognize the leader, (3) the swarm learns to hide the leader from the adversary, and (4) swarm and adversary compete to hide and recognize the leader. While the NN adversary was more successful than humans at identifying leaders without deception, humans did better in conditions in which the swarm was trained to hide its leader from the NN adversary. The study illustrates difficulties likely to emerge in arms races between machine learners and the potential role humans may play in moderating them.
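    The four-phase curriculum can be read as an alternating optimization schedule; this skeleton shows the control flow only, with stub `train` and `rollouts` methods standing in for actual gradient updates on the GNN-based swarm policy and the adversary's leader classifier. All names and step counts are illustrative.

```python
class Agent:
    """Stub standing in for a trainable GNN policy or leader classifier."""
    def __init__(self, name):
        self.name = name
    def train(self, steps, **kwargs):
        print(f"train {self.name} for {steps} steps with {kwargs}")
    def rollouts(self):
        return f"trajectories from {self.name}"

def run_curriculum(swarm, adversary, steps=1000):
    # Phase 1: swarm learns plain leader-follower navigation.
    swarm.train(steps, objective="follow_leader")
    # Phase 2: adversary learns to identify the leader from swarm
    # trajectories, with the swarm policy frozen.
    adversary.train(steps, data=swarm.rollouts())
    # Phase 3: swarm learns to hide the leader from the frozen adversary.
    swarm.train(steps, objective="follow_and_hide", opponent=adversary.name)
    # Phase 4: both sides compete with alternating short updates.
    for _ in range(5):
        swarm.train(100, objective="follow_and_hide", opponent=adversary.name)
        adversary.train(100, data=swarm.rollouts())

run_curriculum(Agent("swarm"), Agent("adversary"))
```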

    Perceptions of Domestic Robots’ Normative Behavior Across Cultures

    As domestic service robots become more common and widespread, they must be programmed to accomplish tasks efficiently while aligning their actions with relevant norms. The first step toward equipping domestic robots with normative reasoning competence is understanding the norms that people apply to robot behavior in specific social contexts. To that end, we conducted an online survey of Chinese and United States participants in which we asked them to select the preferred normative action a domestic service robot should take in a number of scenarios. The paper makes multiple contributions: our extensive survey is the first to (a) collect data on people's attitudes toward the normative behavior of domestic robots, (b) do so across cultures, and (c) study the relative priorities among norms in this domain. We present our findings and discuss their implications for building computational models of robot normative reasoning.